
23 December 2023

Russ Allbery: Review: Bookshops & Bonedust

Review: Bookshops & Bonedust, by Travis Baldree
Series: Legends & Lattes #2
Publisher: Tor
Copyright: 2023
ISBN: 1-250-88611-2
Format: Kindle
Pages: 337
Bookshops & Bonedust is a prequel to the cozy fantasy Legends & Lattes. You can read them in either order, although the epilogue of Bookshops & Bonedust spoils (somewhat guessable) plot developments in Legends & Lattes.

Viv is a new member of the mercenary troop Rackam's Ravens and is still possessed of more enthusiasm than sense. As the story opens, she charges well ahead of her allies and nearly gets killed by a pike through the leg. She survives, but her leg needs time to heal and she is not up to the further pursuit of a necromancer. Rackam pays for a room and a doctor in the small seaside town of Murk and leaves her there to recuperate. The Ravens will pick her up when they come back through town, whenever that is.

Viv is very quickly bored out of her skull. On a whim, and after some failures to find something else to occupy her, she tries a run-down local bookstore and promptly puts her foot through the boardwalk outside it. That's the start of an improbable friendship with the proprietor, a rattkin named Fern with a knack for book recommendations and a serious cash flow problem. Viv, being Viv, soon decides to make herself useful.

The good side and bad side of this book are the same: it's essentially the same book as Legends & Lattes, but this time with a bookstore. There's a medieval sword and sorcery setting, a wide variety of humanoid species, a local business that needs love and attention (this time because it's failing instead of new), a lurking villain, an improbable store animal (this time a gryphlet that I found less interesting than the cat of the coffee shop), and a whole lot of found family.

It turns out I was happy to read that story again, and there were some things I liked better in this version. I find bookstores more interesting than coffee shops, and although Viv and Fern go through a similar process of copying features of a modern bookstore, this felt less strained than watching Viv reinvent the precise equipment and menu of a modern coffee shop in a fantasy world. Also, Fern is an absolute delight, probably my favorite character in either of the books. I love the way that she uses book recommendations as a way of asking questions and guessing at answers about other people.

As with the first book, Baldree's world-building is utterly unconcerned with trying to follow the faux-medieval conventions of either sword and sorcery or D&D-style role-playing games. On one hand, I like this; most of that so-called medievalism is nonsense anyway, and there's no reason why fantasy with D&D-style species diversity should be set in a medieval world. On the other hand, this world seems exactly like a US small town except the tavern also has rooms for rent, there are roving magical armies, and everyone fights with swords for some reason. It feels weirdly anachronistic, and I can't tell if that's because I've been brainwashed into thinking fantasy has to be medievaloid or if it's a true criticism of the book. I was reminded somewhat of reading Jack McDevitt's SF novels, which are supposedly set in the far future but are indistinguishable from 1980s suburbia except with flying cars.

The other oddity with this book is that the reader of the series knows Viv isn't going to stay. This is the problem with writing a second iteration of this story as a prequel. I see why Baldree did it (the story wouldn't have worked if Viv were already established), but it casts a bit of a pall over the cheeriness of the story.
Baldree, to his credit, confronts this directly, weaves it into the relationships, and salvages it a bit more in the epilogue, but it gave the story a sort of preemptive wistfulness that was at odds with how I wanted to read it.

But, despite that, the strength of this book is the characters. Viv is a good person who helps where she can, which sounds like a simple thing but is so restful to read about. This book features her first meeting with the gnome Gallina, who is always a delight. There are delicious baked goods from a dwarf, a grumpy doctor, a grumpier city guard, and a whole cast of people who felt complicated and normal and essentially decent.

I'm not sure the fantasy elements do anything for this book, or this series, other than marketing and the convenience of a few plot devices. Even though one character literally disappears into a satchel, it felt like Baldree could have written roughly the same story as a contemporary novel without a hint of genre. But that's not really a complaint, since the marketing works. I would not have read this series if it had been contemporary novels, and I thoroughly enjoyed it. It's a slice of life novel about kind and decent people for readers who are bored by contemporary settings and would rather read fantasy. Works for me.

I'm hoping Baldree finds other stories, since I'm not sure I want to read this one several more times, but twice was not too much. If you liked Legends & Lattes and are thinking "how can I get more of that," here's the book for you. If you haven't read Legends & Lattes, I think I would recommend reading this one first. It does many of the same things, it's a bit more polished, and then you can read Viv's adventures in internal chronological order.

Rating: 8 out of 10

21 December 2023

Russ Allbery: Review: The Box

Review: The Box, by Marc Levinson
Publisher: Princeton University Press
Copyright: 2006, 2008
Printing: 2008
ISBN: 0-691-13640-8
Format: Trade paperback
Pages: 278
The shipping container as we know it is only about 65 years old. Shipping things in containers is obviously much older; we've been doing that for longer than we've had ships. But the standardized metal box, set on a rail car or loaded with hundreds of its indistinguishable siblings into an enormous, specially-designed cargo ship, became economically significant only recently. Today it is one of the oft-overlooked foundations of global supply chains. The startlingly low cost of container shipping is part of why so much of what US consumers buy comes from Asia, and why most complex machinery is assembled in multiple countries from parts gathered from a dizzying variety of sources.

Marc Levinson's The Box is a history of container shipping, from its (arguable) beginnings in the trailer bodies loaded on Pan-Atlantic Steamship Corporation's Ideal-X in 1956 to just-in-time international supply chains in the 2000s. It's a popular history that falls on the academic side, with a full index and 60 pages of citations and other notes. (Per my normal convention, those pages aren't included in the sidebar page count.) The Box is organized mostly chronologically, but Levinson takes extended detours into labor relations and container standardization at the appropriate points in the timeline.

The book is very US-centric. Asian, European, and Australian shipping is discussed mostly in relation to trade with the US, and Africa is barely mentioned. I don't have the background to know whether this is historically correct for container shipping or is an artifact of Levinson's focus.

Many single-item popular histories focus on something that involves obvious technological innovation (paint pigments) or deep cultural resonance (salt) or at least entertaining quirkiness (punctuation marks, resignation letters). Shipping containers are important but simple and boring. The least interesting chapter in The Box covers container standardization, in which a whole bunch of people had boring meetings, wrote some things down, discovered many of the things they wrote down were dumb, wrote more things down, met with different people to have more meetings, published a standard that partly reflected the fixations of that one guy who is always involved in standards discussions, and then saw that standard be promptly ignored by the major market players. You may be wondering if that describes the whole book. It doesn't, but not because of the shipping containers. The Box is interesting because the process of economic change is interesting, and container shipping is almost entirely about business processes rather than technology.

Levinson starts the substance of the book with a description of shipping before standardized containers. This was the most effective, and probably the most informative, chapter. Beyond some vague ideas picked up via cultural osmosis, I had no idea how cargo shipping worked. Levinson gives the reader a memorable feel for the sheer amount of physical labor involved in loading and unloading a ship with mixed cargo (what's called "breakbulk" cargo to distinguish it from bulk cargo like coal or wheat that fills an entire hold). It's not just the effort of hauling barrels, bales, or boxes with cranes or raw muscle power, although that is significant. It's also the need to touch every piece of cargo to move it, inventory it, warehouse it, and then load it on a truck or train.
The idea of container shipping is widely attributed, including by Levinson, to Malcom McLean, a trucking magnate who became obsessed with the idea of what we now call intermodal transport: using the same container for goods on ships, railroads, and trucks so that the contents don't have to be unpacked and repacked at each transfer point. Levinson uses his career as an anchor for the story, from his acquisition of Pan-Atlantic Steamship Corporation to pursue his original idea (backed by private equity and debt, in a very modern twist), through his years running Sea-Land as the first successful major container shipper, and culminating in his disastrous attempted return to shipping by acquiring United States Lines.

I am dubious of Great Man narratives in history books, and I think Levinson may be overselling McLean's role. Container shipping was an obvious idea that the industry had been talking about for decades. Even Levinson admits that, despite a few gestures at giving McLean central credit. Everyone involved in shipping understood that cargo handling was the most expensive and time-consuming part, and that if one could minimize cargo handling at the docks by loading and unloading full containers that didn't have to be opened, shipping costs would be much lower (and profits higher). The idea wasn't the hard part. McLean was the first person to pull it off at scale, thanks to some audacious economic risks and a willingness to throw sharp elbows and play politics, but it seems likely that someone else would have played that role if McLean hadn't existed.

Container shipping didn't happen earlier because achieving that cost savings required a huge expenditure of capital and a major disruption of a transportation industry that wasn't interested in being disrupted. The ships had to be remodeled and eventually replaced; manufacturing had to change; railroad and trucking in theory had to change (in practice, intermodal transport, McLean's obsession, didn't happen at scale until much later); pricing had to be entirely reworked; logistical tracking of goods had to be done much differently; and significant amounts of extremely expensive equipment to load and unload heavy containers had to be designed, built, and installed. McLean's efforts proved the cost savings was real and compelling, but it still took two decades before the shipping industry reconstructed itself around containers.

That interim period is where this history becomes a labor story, and that's where Levinson's biases become somewhat distracting. In the United States, loading and unloading of cargo ships was done by unionized longshoremen through a bizarre but complex and long-standing system of contract hiring. The cost savings of container shipping comes almost completely from the loss of work for longshoremen. It's a classic replacement of labor with capital; the work done by gangs of twenty or more longshoremen is instead done by a single crane operator at much higher speed and efficiency. The longshoremen unions therefore opposed containerization and launched numerous strikes and other labor actions to delay use of containers, force continued hiring that containers made unnecessary, or win buyouts and payoffs for current longshoremen. Levinson is trying to write a neutral history and occasionally shows some sympathy for longshoremen, but they still get the Luddite treatment in this book: the doomed reactionaries holding back progress.
Longshoremen had a vigorous and powerful union that won better working conditions structured in ways that look absurd to outsiders, such as requiring that ships hire twice as many men as necessary so that half of them could get paid while not working. The unions also had a reputation for corruption that Levinson stresses constantly, and theft of breakbulk cargo during loading and warehousing was common. One of the interesting selling points for containers was that lossage from theft during shipping apparently decreased dramatically.

It's obvious that the surface demand of the longshoremen unions, that either containers not be used or that just as many manual laborers be hired for container shipping as for earlier breakbulk shipping, was impossible, and that the profession as it existed in the 1950s was doomed. But beneath those facts, and the smoke screen of Levinson's obvious distaste for their unions, is a real question about what society owes workers whose jobs are eliminated by major shifts in business practices.

That question of fairness becomes more pointed when one realizes that this shift was massively subsidized by US federal and local governments. McLean's Sea-Land benefited from direct government funding and subsidized navy surplus ships, massive port construction in New Jersey with public funds, and a sweetheart logistics contract from the US military to supply troops fighting the Vietnam War that was so generous that the return voyage was free and every container Sea-Land picked up from Japanese ports was pure profit. The US shipping industry was heavily government-supported, particularly in the early days when the labor conflicts were starting. Levinson notes all of this, but never draws the contrast between the massive support for shipping corporations and the complete lack of formal support for longshoremen.

There are hard ethical questions about what society owes displaced workers even in a pure capitalist industry transformation, and this was very far from pure capitalism. The US government bankrolled large parts of the growth of container shipping, but the only way that longshoremen could get part of that money was through strikes to force payouts from private shipping companies. There are interesting questions of social and ethical history here that would require careful disentangling of the tendency of any group to oppose disruptive change and fairness questions of who gets government support and who doesn't. They will have to wait for another book; Levinson never mentions them.

There were some things about this book that annoyed me, but overall it's a solid work of popular history and deserves its fame. Levinson's account is easy to follow, specific without being tedious, and backed by voluminous notes. It's not the most compelling story on its own merits; you have to have some interest in logistics and economics to justify reading the entire saga. But it's the sort of history that gives one a sense of the fractal complexity of any area of human endeavor, and I usually find those worth reading. Recommended if you like this sort of thing.

Rating: 7 out of 10

19 December 2023

Jonathan Dowland: William Basinski, Gateshead, 2022

I was looking over the list of live music I'd seen this year and realised that avant-garde composer William Basinski was actually last year and I had forgotten to write about it!

In November 2022, Basinski headlined a night of performances which otherwise featured folk from the venue's "Artists in Residence" programme, with some affiliation to Newcastle's DIY music scene. Unfortunately we arrived too late to catch any of the other acts: partly because of the venue's sometimes dogged insistence that people can only enter or leave the halls during intervals, and partly because the building works surrounding it had made the southern entrance effectively closed, so we had to walk to the north side of the building to get in1.

Basinski was performing work from Lamentations. Basinski himself presented very differently from how I imagined him: he's got the Texas drawl, mediated through a fair amount of time spent in New York; very camp, in a glittery top; he kicked off the gig complaining about how tired he was, before a mini rant about the state of the world, riffing on a title from the album: Please, This Shit Has Got To Stop.

We were in Hall 1, the larger of the two, and it was sparsely attended; a few people walked out mid performance. My gig-buddy Rob (a useful barometer for me on how things have gone) remarked that it was one of the most unique and unusual gigs he'd been to. I recognised snatches of the tracks from the album, but I'm hard-pressed to name or sequence them, and they flowed into each other. I don't know how much of what we were hearing was "live" or what, if anything, was being decided during the performance, but Basinski's set-up included what looked like archaic tape equipment, with exposed loops of tape running between spools that could be interfered with by other tools. The encore was a unique, unreleased mix of Melancholia (II), which (making no apologies) Basinski hit play on before retiring backstage.

I didn't take any photos. From memory, I think the venue had specifically stated filming or photos were not allowed for this performance. People at prior shows in New York and London filmed some of them, which were substantially similar; I've included embeds of them above.

Lots of Basinski's work is on Bandcamp; the three pieces I particularly enjoy are Lamentations, On Time Out of Time, and his best-known work, The Disintegration Loops.

  1. I don't want to speak ill of the venue, though: The Sage, as it was, and the Glasshouse, as it is now known, has ended up being the venue I've attended most this year (2023), and it's such a civilised place: plenty of bars, great drinks selection (both alcoholic and not, hot and cold), loads of clean toilets, a free cloakroom, fantastic acoustics, polite staff; the list goes on.

14 December 2023

Arturo Borrero González: OpenTofu: handcrafted include-file mechanism with YAML

I recently started playing with Terraform/OpenTofu almost on a daily basis. The other day I was working with Amazon Managed Prometheus (or AMP), and wanted to define Prometheus alert rules in YAML files. I decided that I needed a way to put the alerts in a bunch of files, and then have the declarative code load them into the correct AMP workspace. I came up with this code pattern that I'm sharing here, for my future reference, and in case it is interesting to someone else. The YAML file where I specify the AMP workspace, and where the alert rule files live:
---
alert_files:
  my_alerts_production:
    amp:
      workspace: "production"
    files: "alert_rules/production/*.yaml"
  my_alerts_staging:
    amp:
      workspace: "staging"
    files: "alert_rules/staging/*.yaml"
Note the files entry contains a file pattern. I will later expand the pattern using the fileset() function. Each rule file would be something like this:
---
name: "my_rule_namespace"
rule_data:  
  # this is prometheus-specific config
  groups:
    - name: "example_alert_group"
      rules:
      - alert: Example_Alert_Cpu
        # just arbitrary values, to produce an example alert
        expr: avg(rate(ecs_cpu_seconds_total{container=~"something"}[2m])) > 1
        for: 10s
        annotations:
          summary: "CPU usage is too high"
          description: "The container average CPU usage is too high."
I'm interested in the data structure mutating into something similar to this:
---
alert_files:
  my_alerts_production:
    amp:
      workspace: "production"
    alerts_data:
      - name: rule_namespace_1
        rule_data:  
          # actual alert definition here
          [..]
      - name: rule_namespace_2
        rule_data:  
          # actual alert definition here
          [..]
  my_alerts_staging:
    amp:
      workspace: "staging"
    alerts_data:
      - name: rule_namespace_1
        rule_data:  
          # actual alert definition here
          [..]
      - name: rule_namespace_2
        rule_data:  
          # actual alert definition here
          [..]
This is the algorithm that does the trick:
locals {
  alerts_config = {
    for x, y in {
      for k, v in local.config.alert_files :
      k => {
        amp : (v.amp),
        files : fileset("", v.files)
      }
    } : x => {
      amp : (y.amp),
      alertmanager_data : [
        for z in (y.files) :
        yamldecode(file(z))
      ]
    }
  }
}
Because of the declarative nature of the Terraform/OpenTofu language, I needed to implement 3 different for loops. Each loop reads the map and transforms it in some way, passing the result into the next loop. A bit convoluted if you ask me. To explain the logic, I think it makes more sense to read it from inside out. First loop:
    for k, v in local.config.alert_files :
    k => {
        amp : (v.amp),
        files : fileset("", v.files)
    }
This loop iterates the input YAML map in key-value pairs, remapping each amp entry, and expanding the file globs with fileset() into a temporary files entry. Second loop:
    for x, y in {
        # previous fileset() loop
    } : x => {
      amp : (y.amp),
      alertmanager_data : [
        # yamldecode() loop
      ]
    }
This intermediate loop is responsible for building the final data structure. It iterates over the result of the previous fileset() loop and remaps it, calling the next loop, the yamldecode() one. Note how the amp entry is being rebuilt in each remap (in the first loop and in this one), otherwise we would lose it! Third loop:
    alertmanager_data : [
        for z in (y.files) :
        yamldecode(file(z))
    ]
And finally, this is maybe the easiest loop of the 3: we iterate the temporary files entry that was created in the first loop, calling yamldecode() for each of the file names generated by fileset(). The resulting data structure should allow you to easily create resources later in a for_each loop.
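As a rough illustration (my own sketch, not part of the original pattern), the structure could then feed a for_each over AMP rule group namespaces; aws_prometheus_rule_group_namespace is the AWS provider resource for AMP rule groups, and local.amp_workspace_ids is a hypothetical map from workspace name to workspace ID:
resource "aws_prometheus_rule_group_namespace" "alerts" {
  # one resource per (config key, rule namespace) pair
  for_each = {
    for entry in flatten([
      for key, cfg in local.alerts_config : [
        for rule in cfg.alertmanager_data : {
          id        = "${key}-${rule.name}"
          workspace = cfg.amp.workspace
          name      = rule.name
          data      = rule.rule_data
        }
      ]
    ]) : entry.id => entry
  }

  workspace_id = local.amp_workspace_ids[each.value.workspace]
  name         = each.value.name
  data         = yamlencode(each.value.data)
}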

13 December 2023

Jonathan Dowland: equivalence problems with StreamGraph

I've been tackling an equivalence problem with rewritten programs in StrIoT, our proof-of-concept stream-processing system. The StrIoT Logical Optimiser applies a set of rewrite rules to a stream-processing program, generating a set of variants that can be reasoned about, ranked, and deployed. The problem I've been tackling is that a variant may appear to be semantically equivalent to another, but compare (with ==) as distinct. The issue relates to the design of our data-type representing programs, in particular, a consequence of our choice to outsource the structural aspect to a 3rd-party library (Algebra.Graph). The Graph library deems nodes that compare as equivalent (again with ==) to be the same. Since a stream-processing program may contain many operators which are equivalent, but distinct, we needed to add a field to our payload type to differentiate them: so we opted for an Integer field, vertexId (something I've described as a "wart" elsewhere). Here's a simplified example of our existing payload type, StreamVertex:
data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [ExpQ]
    , intype     :: String
    , outtype    :: String
    }
A rewrite rule might introduce or eliminate operators from a stream-processing program. For example, consider the rule which "hoists" a filter upstream from a merge operator. In pseudo-Haskell,
streamFilter p . streamMerge [a, b, ...]
=>
streamMerge [ streamFilter p a
            , streamFilter p b
            , ...]
The original streamFilter is removed, and new streamFilters are introduced, one per stream arriving at streamMerge. In general, rules may need to synthesise new operators, and thus new vertexIds. Another rewrite rule might perform the reverse operation. But the individual rules operate in isolation, and so the program variant that results after applying a rule and then applying an inverse rule may not have the same vertexIds, or the same order of vertexIds, as the original program. I thought of the outline of two possible solutions to this.

"well-numbered" StreamGraphs

The first was to encode (and enforce) some rules about how vertexIds are used. If they always began from (say) 1, and were strictly-ascending from the source operator(s), and rewrite rules guaranteed that a "well numbered" input would be "well numbered" after rewriting, this would be sufficient to rule out a rewritten-but-semantically-equivalent program being considered distinct. The trouble with this approach is using properties of a numerical system built around vertexId as a stand-in for the real structural problem. I was not sure I could prove both that the stand-in system was sound and that it was a proper analogue for the underlying structural issue. It feels to me more that the choice to use an external library to encode the structure of a stream-processing program was the issue: the structure itself is a fundamental part of the semantics of the program. What if we had encoded the structure of programs within the same data-type?

alternative data-type

StrIoT programs are trees. The root is the sink node: there is always exactly one. There can be multiple sources (leaf nodes), but they always converge. Operators can have multiple inputs (including zero). The root node has no output, but all other operators have exactly one. I explored transforming StreamVertex into a tree by adding a field representing incoming streams, and dispensing with Graph and vertexId. Something like this:
data StreamProg = StreamProg StreamOperator [Exp] String String [StreamProg]
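To make the comparison idea concrete, here is a rough sketch of my reading of it (not StrIoT code): sinkOf and predecessors are hypothetical helpers, StreamProg is assumed to derive Eq, and converting the [ExpQ] parameters into something comparable is glossed over (they are simply dropped here).
graphEq :: Graph StreamVertex -> Graph StreamVertex -> Bool
graphEq a b = toStreamProg a == toStreamProg b

toStreamProg :: Graph StreamVertex -> StreamProg
toStreamProg g = go (sinkOf g)                      -- start from the unique sink
  where
    go v = StreamProg (operator v) [] (intype v) (outtype v)
                      (map go (predecessors g v))   -- recurse into incoming streams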
A uni-directional transformation from Graph StreamVertex to StreamProg is all that's needed to implement something like ==, so we don't need to keep track of vertexId mappings. Unfortunately, we can't fix the actual Eq (Graph StreamVertex) implementation this way: it delegates to Eq StreamVertex, and we just don't have enough information to fix the problem at that level. But we can write a separate graphEq and use that instead where we need to.

could I go further?

Spoiler: I haven't. But I've been sorely tempted. We still have a separate StreamOperator type, which it would be nice to fold in; and we still have to use a list around the incoming nodes, since different operators accept different numbers of incoming streams. It would be better to encode the correct valences in the type. In 2020 I explored iteratively reducing the StreamVertex data-type to try and get it as close as possible to the ideal end-user API: simple functions. I wrote about one step along that path in Template Haskell and Stream-processing programs, but concluded that, since this was not my main PhD focus, I wouldn't go further. But it has been nagging at my subconscious ever since. I allowed myself a couple of days exploring some advanced concepts, including typed Template Haskell (which has had some developments since 2020), generalised algebraic data types (GADTs) and more generic programming, to see what could be achieved. I'll summarise all that in the next blog post.

Melissa Wen: 15 Tips for Debugging Issues in the AMD Display Kernel Driver

A self-help guide for examining and debugging the AMD display driver within the Linux kernel/DRM subsystem. It's based on my experience as an external developer working on the driver, and is shared with the goal of helping others navigate the driver code. Acknowledgments: These tips were gathered thanks to the countless help received from AMD developers during the driver development process. The list below was obtained by examining open source code, reviewing public documentation, playing with tools, asking in public forums and also with the help of my former GSoC mentor, Rodrigo Siqueira.

Pre-Debugging Steps: Before diving into an issue, it's crucial to perform two essential steps: 1) Check the latest changes: Ensure you're working with the latest AMD driver modifications located in the amd-staging-drm-next branch maintained by Alex Deucher. You may also find bug fixes for newer kernel versions on branches that follow the name pattern drm-fixes-<date>. 2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.

Understanding the issue: Do you really need to change this? Where should you start looking for changes? 3) Is the issue in the AMD kernel driver or in userspace?: Identifying the source of the issue is essential regardless of the GPU vendor. Sometimes this can be challenging, so here are some helpful tips:
  • Record the screen: Capture the screen using a recording app while experiencing the issue. If the bug appears in the capture, it's likely a userspace issue, not the kernel display driver.
  • Analyze the dmesg log: Look for error messages related to the display driver in the dmesg log. If the error message appears before the message [drm] Display Core v..., it's not likely a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here.
4) AMD Display Manager vs. AMD Display Core: The AMD display driver consists of two components:
  • Display Manager (DM): This component interacts directly with the Linux DRM infrastructure. Occasionally, issues can arise from misinterpretations of DRM properties or features. If the issue doesn't occur on other platforms with the same AMD hardware - for example, it only happens on Linux but not on Windows - it's more likely related to the AMD DM code.
  • Display Core (DC): This is the platform-agnostic part responsible for setting and programming hardware features. Modifications to the DC usually require validation on other platforms, like Windows, to avoid regressions.
5) Identify the DC HW family: Each AMD GPU has variations in its hardware architecture. Features and helpers differ between families, so determining the relevant code for your specific hardware is crucial.
  • Find GPU product information in Linux/AMD GPU documentation
  • Check the dmesg log for the Display Core version (since this commit in Linux kernel 6.3). For example:
    • [drm] Display Core v3.2.241 initialized on DCN 2.1
    • [drm] Display Core v3.2.237 initialized on DCN 3.0.1
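For instance, a quick way to pull that line out of a running system's log (assuming you have permission to read the kernel ring buffer):
sudo dmesg | grep -i "display core"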

Investigating the relevant driver code: Keep unrelated driver code from affecting your investigation. 6) Narrow the code inspection down to one DC HW family: the relevant code resides in a directory named after the DC number. For example, the DCN 3.0.1 driver code is located at drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue.

7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20, dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes to investigate the right portion. You can leverage ftrace for supplemental validation. To give an example, it was useful when I was updating DCN3 color mapping to correctly use their new post-blending color capabilities, such as: Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed and whether the implementation from a previous family doesn't fit well with the new HW resources or design. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems. This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect in DCN3 hardware, but working correctly for the DCN2 family. I solved the issue in two steps, thanks to community feedback and validation:

8) Check the hardware capability screening in the driver: You can currently find a list of display hardware capabilities in the drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function. Using DCN301 for illustration, here is the list of its hardware caps:
	/*************************************************
	 *  Resource + asic cap harcoding                *
	 *************************************************/
	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
	pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
	pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
	dc->caps.max_downscale_ratio = 600;
	dc->caps.i2c_speed_in_khz = 100;
	dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
	dc->caps.max_cursor_size = 256;
	dc->caps.min_horizontal_blanking_period = 80;
	dc->caps.dmdata_alloc_size = 2048;
	dc->caps.max_slave_planes = 2;
	dc->caps.max_slave_yuv_planes = 2;
	dc->caps.max_slave_rgb_planes = 2;
	dc->caps.is_apu = true;
	dc->caps.post_blend_color_processing = true;
	dc->caps.force_dp_tps4_for_cp2520 = true;
	dc->caps.extended_aux_timeout_support = true;
	dc->caps.dmcub_support = true;
	/* Color pipeline capabilities */
	dc->caps.color.dpp.dcn_arch = 1;
	dc->caps.color.dpp.input_lut_shared = 0;
	dc->caps.color.dpp.icsc = 1;
	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
	dc->caps.color.dpp.post_csc = 1;
	dc->caps.color.dpp.gamma_corr = 1;
	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
	dc->caps.color.dpp.hw_3d_lut = 1;
	dc->caps.color.dpp.ogam_ram = 1;
	// no OGAM ROM on DCN301
	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
	dc->caps.color.dpp.ocsc = 0;
	dc->caps.color.mpc.gamut_remap = 1;
	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
	dc->caps.color.mpc.ogam_ram = 1;
	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
	dc->caps.color.mpc.ocsc = 1;
	dc->caps.dp_hdmi21_pcon_support = true;
	/* read VBIOS LTTPR caps */
	if (ctx->dc_bios->funcs->get_lttpr_caps) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_lttpr_enable = 0;
		bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
		dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
	}
	if (ctx->dc_bios->funcs->get_lttpr_interop) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_interop_enabled = 0;
		bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
		dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
	}
Keep in mind that the documentation of color capabilities is available in the Linux kernel Documentation.

Understanding the development history: What has brought us to the current state? 9) Pinpoint relevant commits: Use git log and git blame to identify commits targeting the code section you're interested in. 10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with this format, drm/amd/display: 3.2.221, that determines a display release; it's useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can consider that each DC_VER takes around one week to be bumped. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as this one Tested-by: Daniel Wheeler:
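As an illustration of tips 9 and 10, a few commands along these lines can help (the dcn301 path follows the example above; adjust to your hardware family and branch):
# commits touching a specific DC hardware family
git log --oneline -- drivers/gpu/drm/amd/display/dc/dcn301/

# who last touched the capability screening for that family
git blame drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c

# locate DC_VER bumps, useful as bisect boundaries
git log --oneline --grep="drm/amd/display: 3.2." -- drivers/gpu/drm/amd/display/dc/dc.h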

Reducing the inspection area: Focus on what really matters. 11) Identify involved HW blocks: This helps isolate the issue. You can find more information about DCN HW blocks in the DCN Overview documentation. In summary:
  • Plane issues are closer to HUBP and DPP.
  • Blending/Stream issues are closer to MPC, OPP and OPTC. They are related to DRM CRTC subjects.
This information was useful when debugging a hardware rotation issue where the cursor plane got clipped off in the middle of the screen. Finally, the issue was addressed by two patches: 12) Issues around bandwidth (glitches) and clocks: These may be affected by calculations done in these HW blocks and by HW-specific values. The recalculation equations are found in the DML folder. DML stands for Display Mode Library. It's in charge of all required configuration parameters supported by the hardware for multiple scenarios. See more in the AMD DC Overview kernel docs. It's a math library that optimally configures hardware to find the best balance between power efficiency and performance in a given scenario. Finding some clk variables that affect device behavior may be a sign of it. It's hard for an external developer to debug this part, since it involves information from HW specs and firmware programming that we don't have access to. The best option is to provide all relevant debugging information you have and ask AMD developers to check the values you suspect.
  • Do a trick: If you suspect the power setup is degrading performance, try setting the amount of power supplied to the GPU to the maximum and see if it affects the system behavior with this command: sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
I learned it when debugging glitches with hardware cursor rotation on Steam Deck. My first attempt was changing the clock calculation. In the end, Rodrigo Siqueira proposed the right solution targeting bandwidth in two steps:

Checking implicit programming and hardware limitations: Bring implicit programming to the level of consciousness and recognize hardware limitations. 13) Implicit update types: Check if the selected type for atomic update may affect your issue. The update type depends on the mode settings, since programming some modes demands more time for hardware processing. More details in the source code:
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything front
 * end related. Any time viewport dimensions, recout dimensions, scaling ratios or
 * gamma need to be adjusted or pipe needs to be turned on (or disconnected) we do
 * a full update. This cannot be done at ISR level and should be a rare event.
 * Unless someone is stress testing mpo enter/exit, playing with colour or adjusting
 * underscan we don't expect to see this call at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED,  /* ISR safe, most of programming needed, no bw/clk change*/
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};

Using tools: Observe the current state, validate your findings, continue improvements. 14) Use AMD tools to check hardware state and driver programming: these help with understanding your driver settings and checking the behavior when changing those settings.
  • DC Visual confirmation: Check multiple planes and pipe split policy.
  • DTN logs: Check display hardware state, including rotation, size, format, underflow, blocks in use, color block values, etc.
  • UMR: Check ASIC info, register values, KMS state - links and elements (framebuffers, planes, CRTCs, connectors). Source: UMR project documentation
15) Use generic DRM/KMS tools:
  • IGT test tools: Use generic KMS tests or develop your own to isolate the issue in the kernel space. Compare results across different GPU vendors to understand their implementations and find potential solutions. AMD also has specific IGT tests for its GPUs that are expected to work without failures on any AMD GPU. You can check results of HW-specific tests using different display hardware families, or you can compare expected differences between the generic workflow and the AMD workflow.
  • drm_info: This tool summarizes the current state of a display driver (capabilities, properties and formats) per element of the DRM/KMS workflow. Output can be helpful when reporting bugs.

Don't give up! Debugging issues in the AMD display driver can be challenging, but by following these tips and leveraging available resources, you can significantly improve your chances of success. Worth mentioning: This blog post builds upon my talk, "I'm not an AMD expert, but...", presented at the 2022 XDC. It shares guidelines that helped me debug AMD display issues as an external developer of the driver. Open Source Display Driver: The Linux kernel/AMD display driver is open source, allowing you to actively contribute by addressing issues listed in the official tracker. Tackling existing issues or resolving your own can be a rewarding learning experience and a valuable contribution to the community. Additionally, the tracker serves as a valuable resource for finding similar bugs, troubleshooting tips, and suggestions from AMD developers. Finally, it's a platform for seeking help when needed. Remember, contributing to the open source community through issue resolution and collaboration is mutually beneficial for everyone involved.

12 December 2023

Raju Devidas: Nextcloud AIO install with docker-compose and nginx reverse proxy

Nextcloud is a popular self-hosted solution for file sync and share as well as cloud apps such as document editing, chat and talk, calendar, photo gallery etc. This guide will walk you through setting up Nextcloud AIO using Docker Compose. This blog post would not be possible without immense help from Sahil Dhiman a.k.a. sahilister. There are various ways in which the installation could be done; in our setup, here are the pre-requisites.

Step 1: The docker-compose file for Nextcloud AIO. The original compose.yml file is present in Nextcloud AIO's git repo here. Taking that file as a reference, we have our own compose.yml here.
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
    ports:
      - 8080:8080
    environment: # Is needed when using any of the options below
      # - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
      - APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      - APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
      # - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
      - NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud's datadir. Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      # - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
      # - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
      # - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
      # - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
      # - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (Useful e.g. for LDAPS) See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
      # - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
      # - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container. Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
      # - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
      # - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
      # - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
    # networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
      # - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
      # - SKIP_DOMAIN_VALIDATION=true
    # # Uncomment the following line when using SELinux
    # security_opt: ["label:disable"]
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have not removed many of the commented options in the compose file, for a possibility of me using them in the future.If you want a smaller cleaner compose with the extra options, you can refer to
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080
    environment:
      - APACHE_PORT=32323
      - APACHE_IP_BINDING=127.0.0.1
      - NEXTCLOUD_DATADIR=/opt/docker/nextcloud
volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer
I am using a separate directory to store nextcloud data. As per the nextcloud documentation you should be using a separate partition if you want to use this feature; however, I did not have that option on my server, so I used a separate directory instead. Also, we use a custom port on which nextcloud listens for operations; we have set it to 32323 above, but you can use any port in the permissible port range. The 8080 port is used to set up the AIO management interface. Both 8080 and the APACHE_PORT do not need to be open on the host machine, as we will be using a reverse proxy setup with nginx to direct requests. Once you have your preferred compose.yml file, you can start the containers using
$ docker-compose -f compose.yml up -d 
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
Once your containers are running, we can do the nginx setup.

Step 2: Configuring nginx reverse proxy for our domain on the host. A reference nginx configuration for Nextcloud AIO is given in the nextcloud git repository here. You can modify the configuration file according to your needs and setup. Here is the configuration that we are using:

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80;
    #listen [::]:80;            # comment to disable IPv6
    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }
    listen 443 ssl http2;      # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6
    # listen 443 ssl;      # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6
    # http2 on;                                 # uncomment to enable HTTP/2        - supported on nginx v1.25.1+
    # http3 on;                                 # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on;                            # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport;       # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport;  # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6
    server_name cloud.example.com;
    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
    
        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;
        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;
    # Optional settings:
    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;
    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for OCSP stapling: e.g. use 94.140.15.15 for adguard / 1.1.1.1 for cloudflared or 8.8.8.8 for google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need to have valid SSL certificates for your domain for this configuration to work. Steps on getting valid SSL certificates for your domain are beyond the scope of this article. You can do a web search on getting SSL certificates with letsencrypt and you will find several resources on that, or I may write a blog post on it separately in the future. Once your configuration for nginx is done, you can test the nginx configuration using
$ sudo nginx -t 
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and then reload nginx with
$ sudo nginx -s reload

Step 3: Setup of Nextcloud AIO from the browser. To set up Nextcloud AIO, we need to access it using the web browser at the URL our-domain.tld:8080; however, we do not want to open the 8080 port publicly to do this, so to complete the setup, here is a neat hack from sahilister:
ssh -L 8080:127.0.0.1:8080 username@<server-ip>
You can bind the 8080 port of your server to port 8080 of your localhost using local port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. So, once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080
[Screenshot: browser certificate warning when opening 127.0.0.1:8080]
You will get this error because you are trying to access a page on localhost over HTTPS. You can click on advanced and then continue to proceed to the next page. Your data is encrypted over SSH for this session as we are binding the port over SSH. Depending on your choice of browser, the above page might look different. Once you have proceeded, the Nextcloud AIO interface will open and will look something like this.
[Screenshot: Nextcloud AIO initial screen, with the passphrase masked by capsicums]
It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For the purposes of security, I have masked the passwords with capsicums. Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password and then log in. After login you will be greeted with a screen like this.
[Screenshot: Nextcloud AIO screen asking for the domain to use]
Now you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step and see another screen like this:
[Screenshot: Nextcloud AIO optional containers and features selection]
Here you can select any optional containers for the features that you might want. IMPORTANT: Please make sure to also change the time zone at the bottom of the page according to the time zone you wish to operate in.
[Screenshot: time zone setting at the bottom of the page]
The timezone setup is also important because the database will be initialized according to the configured time zone. A wrong time zone could result in a wrongly initialized database and Nextcloud ending up in a startup loop; I faced this issue and could only resolve it after getting help from sahilister. Once you are done changing the timezone and selecting any additional features you want, you can click on Download and start the containers. It will take some time for this process to finish, so take a break, look at the farthest object in your room and take a sip of water. Once the process has finished you will see a page similar to the following one.
[Screenshot: containers being downloaded and started]
Wait patiently for everything to turn green.
[Screenshot: all containers running, with the auto-generated Nextcloud password masked by capsicums]
Once all the containers have started properly, you can open the Nextcloud login interface on your configured domain. The initial login details are auto-generated, as you can see from the screenshot above; again you will see a password that you need to note down or save to enter the Nextcloud interface. Capsicums will not work as passwords; I have masked the auto-generated passwords using capsicums. Now you can click on the Open your Nextcloud button or go to your configured domain to access the login screen.
[Screenshot: Nextcloud login screen]
You can use the login details from the previous step to login to the administrator account of your Nextcloud instance. There you have it, your very own cloud!

Additional Notes:

How to properly reset the Nextcloud setup? While following the above steps, or while following steps from some other tutorial, you may have made a mistake and want to start everything again from scratch. The instructions for it are present in the Nextcloud documentation here. Here is the TLDR for a docker-compose setup. These steps will delete all data; do not use them on an existing Nextcloud setup unless you know what you are doing.
  • Stop your master container.
docker-compose -f compose.yml down -v
The above command will also remove the volume associated with the master container
  • Stop all the child containers that were started by the master container.
docker stop nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
  • Remove all the child containers that were started by the master container.
docker rm nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
  • If you also wish to remove all images associated with Nextcloud, you can do it with
docker rmi $(docker images --filter "reference=nextcloud/*" -q)
  • Remove all volumes associated with the child containers (a bulk-removal one-liner follows this list)
docker volume rm <volume-name>
  • Remove the network associated with Nextcloud
docker network rm nextcloud-aio
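To remove all of the AIO volumes in one go, something like the following works. This is my own sketch rather than an official command, so list the volumes first and check the names before deleting anything:
docker volume ls | grep -i nextcloud
docker volume ls -q | grep -i nextcloud | xargs -r docker volume rm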

Additional references.
  1. Nextcloud Github
  2. Nextcloud reverse proxy documentation
  3. Nextcloud Administration Guide
  4. Nextcloud User Manual
  5. Nextcloud Developer's manual

9 December 2023

Simon Josefsson: Classic McEliece goes to IETF and OpenSSH

My earlier work on Streamlined NTRU Prime has been progressing along. The IETF document on sntrup761 in SSH has passed several process points. GnuPG's libgcrypt has added support for sntrup761. The libssh support for sntrup761 is working, but the merge request is stuck mostly due to lack of time to debug why the regression test suite sporadically errors out in non-sntrup761 related parts with the patch. The foundation for lattice-based post-quantum algorithms has some uncertainty around it, and I have felt that there is more to the post-quantum story than adding sntrup761 to implementations. Classic McEliece has been mentioned to me a couple of times, and I took some time to learn it and did a cut'n'paste job of the proposed ISO standard and published draft-josefsson-mceliece in the IETF to make the algorithm easily available to the IETF community. A high-quality implementation of Classic McEliece has been published as libmceliece and I've been supporting the work of Jan Moj to package libmceliece for Debian; alas, it has been stuck in the ftp-master NEW queue for manual review for over two months. The pre-dependencies librandombytes and libcpucycles are available in Debian already. All that text writing and packaging work set the scene to write some code. When I added support for sntrup761 in libssh, I became familiar with the OpenSSH code base, so it was natural to return to OpenSSH to experiment with a new SSH KEX for Classic McEliece. DJB suggested to pick mceliece6688128 and combine it with the existing X25519+sntrup761 or with plain X25519. While a three-algorithm hybrid between X25519, sntrup761 and mceliece6688128 would be a simple drop-in for those that don't want to lose the benefits offered by sntrup761, I decided to start the journey on a pure combination of X25519 with mceliece6688128. The key combiner in sntrup761x25519 is a simple SHA512 call and the only good thing I can say about that is that it is simple to describe and implement, and doesn't raise too many questions since it is already deployed. After procrastinating coding for months, once I sat down to work it only took a couple of hours until I had a successful Classic McEliece SSH connection. I suppose my brain had sorted everything in the background before I started. To reproduce it, please try the following in a Debian testing environment (I use podman to get a clean environment).
# podman run -it --rm debian:testing-slim
apt update
apt dist-upgrade -y
apt install -y wget python3 librandombytes-dev libcpucycles-dev gcc make git autoconf libz-dev libssl-dev
cd ~
wget -q -O- https://lib.mceliece.org/libmceliece-20230612.tar.gz | tar xfz -
cd libmceliece-20230612/
./configure
make install
ldconfig
cd ..
git clone https://gitlab.com/jas/openssh-portable
cd openssh-portable
git checkout jas/mceliece
autoreconf
./configure # verify 'libmceliece support: yes'
make # CC="cc -DDEBUG_KEX=1 -DDEBUG_KEXDH=1 -DDEBUG_KEXECDH=1"
You should now have a working SSH client and server that supports Classic McEliece! Verify support by running ./ssh -Q kex and it should mention mceliece6688128x25519-sha512@openssh.com. To have it print plenty of debug output, you may remove the # character on the final line, but don't use such a build in production. You can test it as follows:
./ssh-keygen -A # writes to /usr/local/etc/ssh_host_...
# setup public-key based login by running the following:
./ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
adduser --system sshd
mkdir /var/empty
while true; do $PWD/sshd -p 2222 -f /dev/null; done &
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date
On the client you should see output like this:
OpenSSH_9.5p1, OpenSSL 3.0.11 19 Sep 2023
...
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: mceliece6688128x25519-sha512@openssh.com
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: SSH2_MSG_KEX_ECDH_REPLY received
debug1: Server host key: ssh-ed25519 SHA256:YognhWY7+399J+/V8eAQWmM3UFDLT0dkmoj3pIJ0zXs
...
debug1: Host '[localhost]:2222' is known and matches the ED25519 host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
...
debug1: Sending command: date
debug1: pledge: fork
debug1: permanently_set_uid: 0/0
Environment:
  USER=root
  LOGNAME=root
  HOME=/root
  PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
  MAIL=/var/mail/root
  SHELL=/bin/bash
  SSH_CLIENT=::1 46894 2222
  SSH_CONNECTION=::1 46894 ::1 2222
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
Sat Dec  9 22:22:40 UTC 2023
debug1: channel 0: free: client-session, nchannels 1
Transferred: sent 1048044, received 3500 bytes, in 0.0 seconds
Bytes per second: sent 23388935.4, received 78108.6
debug1: Exit status 0
Notice the kex: algorithm: mceliece6688128x25519-sha512@openssh.com output. How about network bandwidth usage? Below is a comparison of a complete SSH client connection such as the one above, which logs in, prints date, and logs out. Plain X25519 is around 7kb, X25519 with sntrup761 is around 9kb, and mceliece6688128 with X25519 is around 1MB. Yes, Classic McEliece has large keys, but for many environments, 1MB of data for the session establishment will barely be noticeable.
./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date 2>&1 | grep ^Transferred
Transferred: sent 3028, received 3612 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 4212, received 4596 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 1048044, received 3764 bytes, in 0.0 seconds
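To run the same comparison in one go, a small loop like the following works; this is just a convenience wrapper around the three commands above, not something from the original post:
for kex in curve25519-sha256 sntrup761x25519-sha512@openssh.com mceliece6688128x25519-sha512@openssh.com; do
  echo "$kex:"
  ./ssh -v -p 2222 localhost -oKexAlgorithms=$kex date 2>&1 | grep ^Transferred
done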
So how about session establishment time?
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date > /dev/null 2>&1; i=$(expr $i + 1); done; date
Sat Dec  9 22:39:19 UTC 2023
Sat Dec  9 22:39:25 UTC 2023
# 6 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date > /dev/null 2>&1; i=$(expr $i + 1); done; date
Sat Dec  9 22:39:29 UTC 2023
Sat Dec  9 22:39:38 UTC 2023
# 9 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date > /dev/null 2>&1; i=$(expr $i + 1); done; date
Sat Dec  9 22:39:55 UTC 2023
Sat Dec  9 22:40:07 UTC 2023
# 12 seconds
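A slightly tidier way to time the same three loops is to wrap them with the shell's time builtin; again this is only a sketch derived from the commands above:
for kex in curve25519-sha256 sntrup761x25519-sha512@openssh.com mceliece6688128x25519-sha512@openssh.com; do
  echo "$kex:"
  time (i=0; while test $i -le 100; do ./ssh -p 2222 localhost -oKexAlgorithms=$kex date > /dev/null 2>&1; i=$(expr $i + 1); done)
done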
I never noticed adding sntrup761, so I'm pretty sure I wouldn't notice this increase either. This is all running on my laptop that runs Trisquel, so take it with a grain of salt, but at least the magnitude is clear. Future work items include: Happy post-quantum SSH'ing! Update: Changing the mceliece6688128_keypair call to mceliece6688128f_keypair (i.e., using the fully compatible f-variant) results in McEliece being just as fast as sntrup761 on my machine. Update 2023-12-26: An initial IETF document draft-josefsson-ssh-mceliece-00 published.

5 December 2023

Matthew Garrett: Why does Gnome fingerprint unlock not unlock the keyring?

There's a decent number of laptops with fingerprint readers that are supported by Linux, and Gnome has some nice integration to make use of that for authentication purposes. But if you log in with a fingerprint, the moment you start any app that wants to access stored passwords you'll get a prompt asking you to type in your password, which feels like it somewhat defeats the point. Mac users don't have this problem - authenticate with TouchID and all your passwords are available after login. Why the difference?

Fingerprint detection can be done in two primary ways. The first is that a fingerprint reader is effectively just a scanner - it passes a graphical representation of the fingerprint back to the OS and the OS decides whether or not it matches an enrolled finger. The second is for the fingerprint reader to make that determination itself, either storing a set of trusted fingerprints in its own storage or supporting being passed a set of encrypted images to compare against. Fprint supports both of these, but note that in both cases all that we get at the end of the day is a statement of "The fingerprint matched" or "The fingerprint didn't match" - we can't associate anything else with that.

Apple's solution involves wiring the fingerprint reader to a secure enclave, an independently running security chip that can store encrypted secrets or keys and only release them under pre-defined circumstances. Rather than the fingerprint reader providing information directly to the OS, it provides it to the secure enclave. If the fingerprint matches, the secure enclave can then provide some otherwise secret material to the OS. Critically, if the fingerprint doesn't match, the enclave will never release this material.

And that's the difference. When you perform TouchID authentication, the secure enclave can decide to release a secret that can be used to decrypt your keyring. We can't easily do this under Linux because we don't have an interface to store those secrets. The secret material can't just be stored on disk - that would allow anyone who had access to the disk to use that material to decrypt the keyring and get access to the passwords, defeating the object. We can't use the TPM because there's no secure communications channel between the fingerprint reader and the TPM, so we can't configure the TPM to release secrets only if an associated fingerprint is provided.

So the simple answer is that fingerprint unlock doesn't unlock the keyring because there's currently no secure way to do that. It's not intransigence on the part of the developers or a conspiracy to make life more annoying. It'd be great to fix it, but I don't see an easy way to do so at the moment.


4 December 2023

Ian Jackson: Don't use apt-get source; use dgit

tl;dr: If you are a Debian user who knows git, don't work with Debian source packages. Don't use apt source, or dpkg-source. Instead, use dgit and work in git. Also, don't use: VCS links on official Debian web pages, debcheckout, or Debian's (semi-)official gitlab, Salsa. These are suitable for Debian experts only; for most people they can be beartraps. Instead, use dgit. Struggling with Debian source packages? A friend of mine recently asked for help on IRC. They're an experienced Debian administrator and user, and were trying to: make a change to a Debian package; build and install and run binary packages from it; and record that change for their future self, and their colleagues. They ended up trying to comprehend quilt. quilt is an ancient utility for managing sets of source code patches, from well before the era of modern version control. It has many strange behaviours and footguns. Debian's ancient and obsolete tarballs-and-patches source package format (which I designed the initial version of in 1993) nowadays uses quilt, at least for most packages. You don't want to deal with any of this nonsense. You don't want to learn quilt, and suffer its misbehaviours. You don't want to learn about Debian source packages and wrestle dpkg-source. Happily, you don't need to. Just use dgit. One of dgit's main objectives is to minimise the amount of Debian craziness you need to learn. dgit aims to empower you to make changes to the software you're running, conveniently and with a minimum of fuss. You can use dgit to get the source code to a Debian package, as a git tree, with dgit clone (and dgit fetch). The git tree can be made into a binary package directly. The only things you really need to know are:
  1. By default dgit fetches from Debian unstable, the main work-in-progress branch. You may want something like dgit clone PACKAGE bookworm,-security (yes, with a comma).
  2. You probably want to edit debian/changelog to make your packages have a different version number.
  3. To build binaries, run dpkg-buildpackage -uc -b.
  4. Debian package builds are often disastrously messy: builds might modify source files; and the official debian/rules clean can be inadequate, or crazy. Always commit before building, and use git clean and git reset --hard instead of running clean rules from the package. (A minimal example session follows this list.)
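As an illustration only (the package name and the version suffix below are placeholders, and dch comes from devscripts), a minimal session along the lines described above could look like:
dgit clone hello bookworm,-security
cd hello
# ...edit the source as needed...
dch --local +local 'Describe your change here'
git commit -a -m 'My local change'
dpkg-buildpackage -uc -b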
Don't try to make a Debian source package. (Don't read the dpkg-source manual!) Instead, to preserve and share your work, use the git branch. dgit pull or dgit fetch can be used to get updates. There is a more comprehensive tutorial, with example runes, in the dgit-user(7) manpage. (There is of course complete reference documentation, but you don't need to bother reading it.) Objections: But I don't want to learn yet another tool. One of dgit's main goals is to save people from learning things you don't need to. It aims to be straightforward, convenient, and (so far as Debian permits) unsurprising. So: don't learn dgit. Just run it and it will be fine :-). Shouldn't I be using official Debian git repos? Absolutely not. Unless you are a Debian expert, these can be terrible beartraps. One possible outcome is that you might build an apparently working program but without the security patches. Yikes! I discussed this in more detail in 2021 in another blog post plugging dgit. Gosh, is Debian really this bad? Yes. On behalf of the Debian Project, I apologise. Debian is a very conservative institution. Change usually comes very slowly. (And when rapid or radical change has been forced through, the results haven't always been pretty, either technically or socially.) Sadly this means that sometimes much needed change can take a very long time, if it happens at all. But this tendency also provides the stability and reliability that people have come to rely on Debian for. I'm a Debian maintainer. You tell me dgit is something totally different! dgit is, in fact, a general bidirectional gateway between the Debian archive and git. So yes, dgit is also a tool for Debian uploaders. You should use it to do your uploads, whenever you can. It's more convenient and more reliable than git-buildpackage and dput runes, and produces better output for users. You too can start to forget how to deal with source packages! A full treatment of this is beyond the scope of this blog post.


3 December 2023

Ben Hutchings: FOSS activity in September 2023

24 November 2023

Jonathan Dowland: Dockerfile ARG footgun

This week I stumbled across a footgun in the Dockerfile/Containerfile ARG instruction. ARG is used to define a build-time variable, possibly with a default value embedded in the Dockerfile, which can be overridden at build time (by passing --build-arg). The value of a variable FOO is interpolated into any following instructions that include the token $FOO. This behaves a little like the existing ENV instruction, which, for RUN instructions at least, is also interpolated, but can't (I don't think) be set at build time, and bleeds through to the resulting image metadata. ENV has been around longer, and the documentation indicates that, when both are present, ENV takes precedence. This fits with my mental model of how things should work, but what the documentation does not make clear is that the ENV doesn't need to have been defined in the same Dockerfile: environment variables inherited from the base image also override ARGs. To me this is unexpected and far less sensible: in effect, if you are building a layered image and want to use ARG, you have to be fairly sure that your base image doesn't define an ENV of the same name, either now or in the future, unless you're happy for its value to take precedence. In our case, we broke a downstream build process by defining a new environment variable USER in our image. To defend against the unexpected, I'd recommend using somewhat unique ARG names: perhaps prefix them with something unusual and unlikely to be shadowed. Or don't use ARG at all, and push that kind of logic up the stack to a Dockerfile pre-processor like CeKit.
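A minimal reproduction of the footgun, with made-up image and variable names, illustrating the behaviour described above:
# base/Dockerfile: the base image defines an environment variable
FROM debian:bookworm
ENV USER=alice

# app/Dockerfile: a downstream image tries to use a build argument of the same name
FROM example/base:latest
ARG USER=bob
# Even with --build-arg USER=bob, $USER resolves to "alice" here, because the
# ENV inherited from the base image takes precedence over the ARG.
RUN echo "building as: $USER"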

20 November 2023

Russ Allbery: Review: The Exiled Fleet

Review: The Exiled Fleet, by J.S. Dewes
Series: Divide #2
Publisher: Tor
Copyright: 2021
ISBN: 1-250-23635-5
Format: Kindle
Pages: 421
The Exiled Fleet is far-future interstellar military SF. It is a direct sequel to The Last Watch. You don't want to start here. The Last Watch took a while to get going, but it ended with some fascinating world-building and a suitably enormous threat. I was hoping Dewes would carry that momentum into the second book. I was disappointed; instead, The Exiled Fleet starts with interpersonal angst and wallowing and takes an annoyingly long time to build up narrative tension again. The world-building of the first book looked outward, towards aliens and strange technology and stranger physics, while setting up contributing problems on the home front. The Exiled Fleet pivots inwards, both in terms of world-building and in terms of character introspection. Neither of those worked as well for me. There's nothing wrong with the revelations here about human power structures and the politics that the Sentinels have been missing at the edge of space, but it also felt like a classic human autocracy without much new to offer in either wee thinky bits or plot structure. We knew most of shape from the start of the first book: Cavalon's grandfather is evil, human society is run as an oligarchy, and everything is trending authoritarian. Once the action started, I was entertained but not gripped the way that I was when reading The Last Watch. Dewes makes a brief attempt to tap into the morally complex question of the military serving as a brake on tyranny, but then does very little with it. Instead, everything is excessively personal, turning the political into less of a confrontation of ideologies or ethics and more a story of family abuse and rebellion. There is even more psychodrama in this book than there was in the previous book. I found it exhausting. Rake is barely functional after the events of the previous book and pushing herself way too hard at the start of this one. Cavalon regresses considerably and starts falling apart again. There's a lot of moping, a lot of angst, and a lot of characters berating themselves and occasionally each other. It was annoying enough that I took a couple of weeks break from this book in the middle before I could work up the enthusiasm to finish it. Some of this is personal preference. My favorite type of story is competence porn: details about something esoteric and satisfyingly complex, a challenge to overcome, and a main character who deploys their expertise to overcome that challenge in a way that shows they generally have their shit together. I can enjoy other types of stories, but that's the story I'll keep reaching for. Other people prefer stories about fuck-ups and walking disasters, people who barely pull together enough to survive the plot (or sometimes not even that). There's nothing wrong with that, and neither approach is right or wrong, but my tolerance for that story is usually lot lower. I think Dewes is heading towards the type of story in which dysfunctional characters compensate for each other's flaws in order to keep each other going, and intellectually I can see the appeal. But it's not my thing, and when the main characters are falling apart and the supporting characters project considerably more competence, I wish the story had different protagonists. It didn't help that this is in theory military SF, but Dewes does not seem to want to deploy any of the support framework of the military to address any of her characters' problems. 
This book is a lot of Rake and Cavalon dragging each other through emotional turmoil while coming to terms with Cavalon's family. I liked their dynamic in the first book when it felt more like Rake showing leadership skills. Here, it turns into something closer to found family in ways that seemed wildly inconsistent with the military structure, and while I'm normally not one to defend hierarchical discipline, I felt like Rake threw out the only structure she had to handle the thousands of other people under her command and started winging it based on personal friendship. If this were a small commercial crew, sure, fine, but Rake has a personal command responsibility that she obsessively angsts about and yet keeps abandoning. I realize this is probably another way to complain that I wanted competence porn and got barely-functional fuck-ups. The best parts of this series are the strange technologies and the aliens, and they are again the best part of this book. There was a truly great moment involving Viator technology that I found utterly delightful, and there was an intriguing setup for future books that caught my attention. Unfortunately, there were also a lot of deus ex machina solutions to problems, both from convenient undisclosed character backstories and from alien tech. I felt like the characters had to work satisfyingly hard for their victories in the first book; here, I felt like Dewes kept having issues with her characters being at point A and her plot at point B and pulling some rabbit out of the hat to make the plot work. This unfortunately undermined the cool factor of the world-building by making its plot device aspects a bit too obvious. This series also turns out not to be a duology (I have no idea why I thought it would be). By the end of The Exiled Fleet, none of the major political or world-building problems have been resolved. At best, the characters are in a more stable space to start being proactive. I'm cautiously optimistic that could mean the series would turn into the type of story I was hoping for, but I'm worried that Dewes is interested in writing a different type of character story than I am interested in reading. Hopefully there will be some clues in the synopsis of the (as yet unannounced) third book. I thought The Last Watch had some first-novel problems but was worth reading. I am much more reluctant to recommend The Exiled Fleet, or the series as a whole given that it is incomplete. Unless you like dysfunctional characters, proceed with caution. Rating: 5 out of 10

11 November 2023

Matthias Klumpp: AppStream 1.0 released!

Today, 12 years after the meeting where AppStream was first discussed and 11 years after I released a prototype implementation I am excited to announce AppStream 1.0!    Check it out on GitHub, or get the release tarball or read the documentation or release notes!

Some nostalgic memories I was not in the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise, I watched parts of the blurry recording later). I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others. At the time I was writing a software deployment tool called Listaller this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream. Back then I saw AppStream as a necessary side-project for my actual project, and didn t even consider me as the maintainer of it for quite a while (I hadn t been at the meeting afterall). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later, with an even more modern design called Flatpak. I also had no idea how incredibly complex AppStream would become and how many features it would have and how much more maintenance work it would be and also not how ubiquitous it would become. The modern Linux desktop uses AppStream everywhere now, it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard s fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know yet.

What is new in 1.0?

API breaks The most important thing that s new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn t mean that the API is completely new and nothing looks like before though, when possible the previous API design was kept and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+. For the XML specification, some older compatibility for XML that had no or very few users has been removed as well. This affects for example release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: If your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release. Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant, you can not make it generate data for versions below that (this greatly reduced the maintenance cost of the project).

Developer element For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this is changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows future extensibility, and also allows setting a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the developer information per-app in future, this is also now possible. Do not worry though, the developer_name tag is also still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.
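A sketch of what this looks like in MetaInfo XML (the id value below is made up for illustration):
<developer id="org.example.team">
  <name translate="no">Example Project Team</name>
</developer>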

Scale factor for screenshots Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become deployed more widespread though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.
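Roughly, in MetaInfo XML (the URL is a placeholder, and the exact placement of the attribute should be checked against the specification):
<screenshot type="default">
  <image type="source" scale="2">https://example.org/screenshot-hidpi.png</image>
</screenshot>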

Screenshot environments It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.
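Schematically it could look like this; the environment token shown is only illustrative, and the allowed values are listed in the specification:
<screenshot environment="gnome:dark">
  <image type="source">https://example.org/screenshot-dark.png</image>
</screenshot>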

References tag This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital object identifier) or provide a link to a CFF file to provide citation information. It also allows to link to other scientific registries. Check out the references tag specification for more details.

Release tags Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.

Multi-platform support Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this

Better compatibility checks For a long time I thought that the AppStream library should just be a thin layer above the XML and that software centers should just implement a lot of the actual logic. This has not been the case for a while, but there was still a lot of complex AppStream features that were hard for software centers to implement and where it makes sense to have one implementation that projects can just use. The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out! With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.

So much more! The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.

Outlook I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints. So, what's in it for the future? Contrary to what I thought, AppStream does not really seem to be done and feature complete at some point; there is always something to improve, and people come up with new usecases all the time. So, expect more of the same in future: Bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature. Onwards to 1.0.1!

5 November 2023

Vincent Bernat: Non-interactive SSH password authentication

SSH offers several forms of authentication, such as passwords and public keys. The latter are considered more secure. However, password authentication remains prevalent, particularly with network equipment.1 A classic solution to avoid typing a password for each connection is sshpass, or its more correct variant passh. Here is a wrapper for Zsh, getting the password from pass, a simple password manager:2
pssh() {
  passh -p <(pass show network/ssh/password | head -1) ssh "$@"
}
compdef pssh=ssh
This approach is a bit brittle as it requires parsing the output of the ssh command to look for a password prompt. Moreover, if no password is required, the password manager is still invoked. Since OpenSSH 8.4, we can use SSH_ASKPASS and SSH_ASKPASS_REQUIRE instead:
ssh() {
  set -o localoptions -o localtraps
  local passname=network/ssh/password
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
pass show $passname | head -1
EOF
  chmod u+x $helper
  SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
}
If the password is incorrect, we can display a prompt on the second attempt:
ssh() {
  set -o localoptions -o localtraps
  local passname=network/ssh/password
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
if [ -k $helper ]; then
  {
    oldtty=\$(stty -g)
    trap 'stty \$oldtty < /dev/tty 2> /dev/null' EXIT INT TERM HUP
    stty -echo
    print "\rpassword: "
    read password
    printf "\n"
  } > /dev/tty < /dev/tty
  printf "%s" "\$password"
else
  pass show $passname | head -1
  chmod +t $helper
fi
EOF
  chmod u+x $helper
  SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
}
A possible improvement is to use a different password entry depending on the remote host:3
ssh() {
  # Grab login information
  local -A details
  details=(${=${(M)${:-"${(@f)$(command ssh -G "$@" 2>/dev/null)}"}:#(host|hostname|user) *}})
  local remote=${details[host]:-${details[hostname]}}
  local login=${details[user]}@${remote}
  # Get password name
  local passname
  case "$login" in
    admin@*.example.net)  passname=company1/ssh/admin ;;
    bernat@*.example.net) passname=company1/ssh/bernat ;;
    backup@*.example.net) passname=company1/ssh/backup ;;
  esac
  # No password name? Just use regular SSH
  [[ -z $passname ]] && {
    command ssh "$@"
    return $?
  }
  # Invoke SSH with the helper for SSH_ASKPASS
  # [...]
}
It is also possible to make scp invoke our custom ssh function:
scp() {
  set -o localoptions -o localtraps
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
source ${(%):-%x}
ssh "\$@"
EOF
  command scp -S $helper "$@"
}
For the complete code, have a look at my zshrc. As an alternative, you can put the ssh() function body into its own script file and replace command ssh with /usr/bin/ssh to avoid an unwanted recursive call. In this case, the scp() function is not needed anymore.
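For example, such a standalone variant could look roughly like this; the script name, path, and password entry are my own assumptions, not taken from the original zshrc:
#!/bin/zsh
# /usr/local/bin/pssh: same idea as the ssh() function above, without recursion
passname=network/ssh/password
helper=$(mktemp)
trap "rm -f $helper" EXIT INT
cat > $helper <<EOF
#!/bin/sh
pass show $passname | head -1
EOF
chmod u+x $helper
SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force /usr/bin/ssh "$@"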

  1. First, some vendors make it difficult to associate an SSH key with a user. Then, many vendors do not support certificate-based authentication, making it difficult to scale. Finally, interactions between public-key authentication and finer-grained authorization methods like TACACS+ and Radius are still uncharted territory.
  2. The clear-text password never appears on the command line, in the environment, or on the disk, making it difficult for a third party without elevated privileges to capture it. On Linux, Zsh provides the password through a file descriptor.
  3. To decipher the fourth line, you may get help from print -l and the zshexpn(1) manual page. details is an associative array defined from an array alternating keys and values.

2 November 2023

Fran ois Marier: Upgrading from Debian 11 bullseye to 12 bookworm

Over the last few months, I upgraded my Debian machines from bullseye to bookworm. The process was uneventful, but I ended up reconfiguring several things afterwards in order to modernize my upgraded machines.

Logcheck I noticed in this release that the transition to journald is essentially complete. This means that rsyslog is no longer needed on most of my systems:
apt purge rsyslog
Once that was done, I was able to comment out the following lines in /etc/logcheck/logcheck.logfiles.d/syslog.logfiles:
#/var/log/syslog
#/var/log/auth.log
I did have to adjust some of my custom logcheck rules, particularly the ones that deal with kernel messages:
--- a/logcheck/ignore.d.server/local-kernel
+++ b/logcheck/ignore.d.server/local-kernel
@@ -1,1 +1,1 @@
-^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+]\ IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
+^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: (\[[0-9. ]+]\ )?IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
Then I moved local entries from /etc/logcheck/logcheck.logfiles to /etc/logcheck/logcheck.logfiles.d/local.logfiles (/var/log/syslog and /var/log/auth.log are enabled by default when needed) and removed some files that are no longer used:
rm /var/log/mail.err*
rm /var/log/mail.warn*
rm /var/log/mail.info*
Finally, I had to fix any unescaped | characters in my local rules. For example, error == NULL || \*error == NULL must now be written as error == NULL \|\| \*error == NULL.

Networking After the upgrade, I got a notice that the isc-dhcp-client is now deprecated and so I removed it from my system:
apt purge isc-dhcp-client
This however meant that I need to ensure that my network configuration software does not depend on the now-deprecated DHCP client. On my laptop, I was already using NetworkManager for my main network interfaces and that has built-in DHCP support.

Migration to systemd-networkd On my backup server, I took this opportunity to switch from ifupdown to systemd-networkd by removing ifupdown:
apt purge ifupdown
rm /etc/network/interfaces
putting the following in /etc/systemd/network/20-wired.network:
[Match]
Name=eno1
[Network]
DHCP=yes
MulticastDNS=yes
and then enabling/starting systemd-networkd:
systemctl enable systemd-networkd
systemctl start systemd-networkd
I also needed to install polkit:
apt install --no-install-recommends policykit-1
in order to allow systemd-networkd to set the hostname. In order to start my firewall automatically as interfaces are brought up, I wrote a dispatcher script to apply my existing iptables rules.
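My dispatcher script is not shown here, but a minimal sketch of one, assuming the networkd-dispatcher package and the iptables rules files referenced later in this post, could look like:
#!/bin/sh
# /etc/networkd-dispatcher/routable.d/50-firewall (hypothetical path)
iptables-restore < /etc/network/iptables.up.rules
ip6tables-restore < /etc/network/ip6tables.up.rules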

Migration to predictable network interface names On my Linode server, I did the same as on the backup server, but I put the following in /etc/systemd/network/20-wired.network since it has a static IPv6 allocation:
[Match]
Name=enp0s4
[Network]
DHCP=yes
Address=2600:3c01::xxxx:xxxx:xxxx:939f/64
Gateway=fe80::1
and switched to predictable network interface names by deleting these two files:
  • /etc/systemd/network/50-virtio-kernel-names.link
  • /etc/systemd/network/99-default.link
and then changing eth0 to enp0s4 in:
  • /etc/network/iptables.up.rules
  • /etc/network/ip6tables.up.rules
  • /etc/rc.local (for OpenVPN)
  • /etc/logcheck/ignored.d.*/*
Then I regenerated all initramfs:
update-initramfs -u -k all
and rebooted the virtual machine.

Giving systemd-resolved control of /etc/resolv.conf After reading this history of DNS resolution on Linux, I decided to modernize my resolv.conf setup and let systemd-resolved handle /etc/resolv.conf. I installed the package:
apt install systemd-resolved
and then removed no-longer-needed packages:
apt purge resolvconf avahi-daemon
I also disabled support for Link-Local Multicast Name Resolution (LLMNR) after reading this person's reasoning by putting the following in /etc/systemd/resolved.conf.d/llmnr.conf:
[Resolve]
LLMNR=no
I verified that mDNS is enabled and LLMNR is disabled:
$ resolvectl mdns
Global: yes
Link 2 (enp0s25): yes
Link 3 (wlp3s0): yes
$ resolvectl llmnr
Global: no
Link 2 (enp0s25): no
Link 3 (wlp3s0): no
Note that if you want auto-discovery of local printers using CUPS, you need to keep avahi-daemon since cups-browsed doesn't support systemd-resolved. You can verify that it works using:
sudo lpinfo --include-schemes dnssd -v

Dynamic DNS I replaced ddclient with inadyn since it doesn't work with no-ip.com anymore, using the configuration I described in an old blog post.

chkrootkit I moved my customizations in /etc/chkrootkit.conf to /etc/chkrootkit/chkrootkit.conf after seeing this message in my logs:
WARNING: /etc/chkrootkit.conf is deprecated. Please put your settings in /etc/chkrootkit/chkrootkit.conf instead: /etc/chkrootkit.conf will be ignored in a future release and should be deleted.

ssh As mentioned in Debian bug#1018106, to silence the following warnings:
sshd[6283]: pam_env(sshd:session): deprecated reading of user environment enabled
I changed the following in /etc/pam.d/sshd:
--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -44,7 +44,7 @@ session    required     pam_limits.so
 session    required     pam_env.so # [1]
 # In Debian 4.0 (etch), locale-related environment variables were moved to
 # /etc/default/locale, so read that as well.
-session    required     pam_env.so user_readenv=1 envfile=/etc/default/locale
+session    required     pam_env.so envfile=/etc/default/locale
 # SELinux needs to intervene at login time to ensure that the process starts
 # in the proper default security context.  Only sessions which are intended
I also made the following changes to /etc/ssh/sshd_config.d/local.conf based on the advice of ssh-audit 2.9.0:
-KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
+KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
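To re-check the server after such edits, ssh-audit can simply be pointed at the host again (assuming it is installed, for example from the Debian package or via pip):
ssh-audit localhost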

31 October 2023

Iustin Pop: Raspberry PI OS: upgrading and cross-grading

One of the downsides of running Raspberry PI OS is the fact that - not having the resources of pure Debian - upgrades are not recommended, and cross-grades (migrating between armhf and arm64) are not even mentioned. Is this really true? It is, after all, a Debian-based system, so it should in theory be doable. Let's try!

Upgrading The recently announced release based on Debian Bookworm here says:
We have always said that for a major version upgrade, you should re-image your SD card and start again with a clean image. In the past, we have suggested procedures for updating an existing image to the new version, but always with the caveat that we do not recommend it, and you do this at your own risk. This time, because the changes to the underlying architecture are so significant, we are not suggesting any procedure for upgrading a Bullseye image to Bookworm; any attempt to do this will almost certainly end up with a non-booting desktop and data loss. The only way to get Bookworm is either to create an SD card using Raspberry Pi Imager, or to download and flash a Bookworm image from here with your tool of choice.
Which means, it's time to actually try it. It turns out it's actually trivial, if you use RPIs as headless servers. I had only three issues:
  • if using an initrd, the new initrd-building scripts/hooks are looking for some binaries in /usr/bin, and not in /bin; solution: install manually the usrmerge package, and then re-run dpkg --configure -a;
  • also if using an initrd, the scripts are looking for the kernel config file in /boot/config-$(uname -r), and the raspberry pi kernel package doesn't provide this; workaround: modprobe configs && zcat /proc/config.gz > /boot/config-$(uname -r);
  • and finally, on normal RPI systems that don't use manual configuration of interfaces in /etc/network/interfaces, migrating from the previous dhcpcd to NetworkManager will break network connectivity, and require you to log in locally and fix things.
I expect most people to hit only the 3rd, and almost no-one to use initrd on raspberry pi. But, overall, aside from these two issues and a couple of cosmetic ones (login.defs being rewritten from scratch and showing a baffling diff, for example), it was easy. Is it worth doing? Definitely. Had no data loss, and no non-booting system.

Cross-grading (32 bit to 64 bit userland) This one is actually painful. Internet searches go from "it's possible, I think" to "it's definitely not worth trying". Examples: Aside from these, there are a gazillion other posts about switching the kernel to 64 bit. And that's worth doing on its own, but it's only half the way. So, armed with two different systems - a RPI4 4GB and a RPI Zero W2 - I tried to do this. And while it can be done, it takes many hours - the first system was about 6 hours, the second the same, and a third RPI4 probably took only ~3 hours since I knew the problematic issues. So, what are the steps? Basically:
  • install devscripts, since you will need dget
  • enable new architecture in dpkg: dpkg --add-architecture arm64
  • switch over apt sources to include the 64 bit repos, which are different than the 32 bit ones (Raspberry PI OS did a migration here; normally a single repository has all architectures, of course)
  • downgrade all custom rpi packages/libraries to the standard bookworm/bullseye version, since dpkg won't usually allow a single library package to have different versions (I think it's possible to override, but I didn't bother)
  • install libc for the arm64 arch (this takes some effort, it's actually a set of 3-4 packages)
  • once the above is done, install whiptail:arm64 and rejoice at running a 64-bit binary!
  • then painfully go through sets of packages and migrate the set to arm64:
    • sometimes this works via apt, sometimes you'll need to use dget and dpkg -i
    • make sure you download both the armhf and arm64 versions before doing dpkg -i, since you'll need to roll back some installs
  • at one point, you'll be able to switch over dpkg and apt to arm64, at which point the default architecture flips over; from here, if you've done it at the right moment, it becomes very easy; you'll probably need an apt install --fix-broken at first, though
  • and then, finish by replacing all packages with arm64 versions
  • and then, dpkg --remove-architecture armhf, reboot, and profit!
But it's tears and blood to get to that point. A rough sketch of the very first steps follows; the later package-by-package migration cannot be captured in a few lines.
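As an illustration only (based on the steps above; any package names beyond those already mentioned are assumptions), the opening moves look like this:
dpkg --add-architecture arm64
apt update
apt install devscripts        # for dget
apt install libc6:arm64       # in practice a small set of related packages
apt install whiptail:arm64    # the first 64-bit binary, as noted above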

Pain point 1: RPI custom versions of packages Since the 32bit armhf architecture is a bit weird - having many variations - it turns out that raspberry pi OS has many packages that are very slightly tweaked to disable a compilation flag or work around build/test failures, or whatnot. Since we talk here about 64-bit capable processors, almost none of these are needed, but they do make life harder since the 64 bit version doesn't have those overrides. So what is needed would be to say "downgrade all armhf packages to the version in the debian upstream repo", but I couldn't find the right apt pinning incantation to do that. So what I did was to remove the 32bit repos, then use apt-show-versions to see which packages have versions that are no longer in any repo, then downgrade them. There's a further, minor, complication that there were about 3-4 packages with the same version but different hash (!), which simply needed apt install --reinstall, I think.

Pain point 2: architecture independent packages There is one very big issue with dpkg in all this story, and the one that makes things very problematic: while you can have a library package installed multiple times for different architectures, as the files live in different paths, a non-library package can only be installed once (usually). For binary packages (arch:any), that is fine. But architecture-independent packages (arch:all) are problematic since usually they depend on a binary package, but they always depend on the default architecture version! Hrmm, and I just realise I don't have logs from this, so I'm only ~80% confident. But basically:
  • vim-solarized (arch:all) depends on vim (arch:any)
  • if you replace vim armhf with vim arm64, this will break vim-solarized, until the default architecture becomes arm64
So you need to keep track of which packages apt will de-install, for later re-installation. It is possible that Multi-Arch: foreign solves this, per the debian wiki which says:
Note that even though Architecture: all and Multi-Arch: foreign may look like similar concepts, they are not. The former means that the same binary package can be installed on different architectures. Yet, after installation such packages are treated as if they were native architecture (by definition the architecture of the dpkg package) packages. Thus Architecture: all packages cannot satisfy dependencies from other architectures without being marked Multi-Arch foreign.
It also has warnings about how to properly use this. But, in general, not many packages have it, so it is a problem.

Pain point 3: remove + install vs overwrite It seems that depending on how the solver computes a solution, when migrating a package from 32 to 64 bit, it can choose either to:
  • overwrite in place the package (akin to dpkg -i)
  • remove + install later
The former is OK, the latter is not. Or, actually, it might be that apt can never do this; for example (edited for brevity):
# apt install systemd:arm64 --no-install-recommends
The following packages will be REMOVED:
  systemd
The following NEW packages will be installed:
  systemd:arm64
0 upgraded, 1 newly installed, 1 to remove and 35 not upgraded.
Do you want to continue? [Y/n] y
dpkg: systemd: dependency problems, but removing anyway as you requested:
 systemd-sysv depends on systemd.
Removing systemd (247.3-7+deb11u2) ...
systemd is the active init system, please switch to another before removing systemd.
dpkg: error processing package systemd (--remove):
 installed systemd package pre-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 systemd
Processing was halted because there were too many errors.
But at the same time, overwrite in place is all good - via dpkg -i from /var/cache/apt/archives. In this case it manifested via a prerm script; in other cases it manifests via dependencies that are no longer satisfied for packages that can't be removed, etc. etc. So you will have to resort to dpkg -i a lot.

Pain point 4: lib- packages that are not libs During the whole process, it is very tempting to just go ahead and install the corresponding arm64 package for every armhf lib package, in one go, since these can coexist. Well, this simple plan is complicated by the fact that some packages are named libfoo-bar, but are actually holding (e.g.) the bar binary for the libfoo package. Examples:
  • libmagic-mgc contains /usr/lib/file/magic.mgc, which conflicts between the 32 and 64 bit versions; of course, it's the exact same file, so this should be an arch:all package, but
  • libpam-modules-bin and liblockfile-bin actually contain binaries (per the -bin suffix)
It's possible to work around all this, but it changes a 1-minute command:
# apt install $(dpkg -l | grep ^ii | awk '{print $2}' | grep :armhf | sed -e 's/:armhf/:arm64/')
into a 10-20 minute fight with packages (like most other steps).

Is it worth doing? Compared to the simple bullseye to bookworm upgrade, I'm not sure about this. The result? Yes, definitely, the system feels - weirdly - much more responsive, logged in over SSH. I guess the arm64 base architecture has some more efficient ops than the "lowest denominator armhf", so to say (e.g. there was in the 32 bit version some rpi-custom package with string ops), and thus migrating to 64 bit makes more things "faster", but this is subjective so it might be actually not true. But from the point of view of the effort? Unless you like to play with dpkg and apt, and understand how these work and break, I'd rather say: migrate to ansible and automate the deployment. It's doable, sure, and by the third system I got this nailed down pretty well, but it was a lot of time spent. The good aspect is that I did 3 migrations:
  • rpi zero w2: bullseye 32 bit to 64 bit, then bullseye to bookworm
  • rpi 4: bullseye to bookworm, then bookworm 32bit to 64 bit
  • same, again, for a more important system
And all three worked well with no data loss. But I'm really glad I have this behind me; I probably wouldn't do a fourth system, even if forced. And now, waiting for the RPI 5 to be available. See you!

Russell Coker: Links October 2023

The Daily Kos has an interesting article about a new more effective method of desalination [1]. Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award? A Hacker News comment has an interesting explanation of Unix signals [3]. Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores. Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6]. This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones. Tara Barnett's Everything Open lecture Swiss Army GLAM had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development. Actually Hardcore Overclocking has an interesting youtube video about the differences between x8 and x14 DDR4 DIMMs [9]. Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won't be easy or pleasant.

22 October 2023

Jamie McClelland: Users without passwords

About fifteen years ago, while debugging a database problem, I was horrified to discover that we had two root users - one with the password I had been using and one without a password. Nooo! So I wrote a simple maintenance script that searched for and deleted any user in our database without a password (roughly sketched after the error output below). I even made it part of our puppet recipe - since the database server was in use by users and I didn't want anyone using SQL statements to change their password to an empty value. Then I forgot about it.

Recently, I upgraded our MariaDB databases to Debian bullseye, which inserted the mariadb.sys user, which... doesn't have a password set. It seems to be locked down in other ways, but my dumb script didn't know about that and happily deleted the user. Who needs that mariadb.sys user anyway? Apparently we all do. On one server, I can't log in as root anymore. On another server I can log in as root, but if I try to list users I get an error:
ERROR 1449 (HY000): The user specified as a definer ( mariadb.sys @ localhost ) does not exist
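For context, the over-eager cleanup was roughly of this shape (a reconstruction assuming the pre-10.4 layout where credentials live in mysql.user, not the actual script):
# rough reconstruction of the cleanup that bit the passwordless mariadb.sys user
mysql -e "DELETE FROM mysql.user WHERE Password = ''; FLUSH PRIVILEGES;"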
The Internet is full of useless advice. The most common is to simply insert that user. Except:
MariaDB [mysql]> CREATE USER 'mariadb.sys'@'localhost' ACCOUNT LOCK PASSWORD EXPIRE;
ERROR 1396 (HY000): Operation CREATE USER failed for 'mariadb.sys'@'localhost'
MariaDB [mysql]> 
Yeah, that's not going to work. It seems like we are dealing with two changes. One, the old mysql.user table was replaced by the global_priv table and then turned into a view for backwards compatibility. And two, for sensible reasons, the default definer for this view has been changed from the root user to a user that, ahem, is unlikely to be changed or deleted. Apparently I can't add the mariadb.sys user because it would alter the user view, which has a definer that doesn't exist. Although I'm not sure if this really is the reason? Fortunately, I found an excellent suggestion for changing the definer of a view. My modified version of the answer is: run the following command, which will generate a SQL statement:
SELECT CONCAT("ALTER DEFINER=root@localhost VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'mariadb.sys@localhost';
Then execute the generated statement (or pipe the output straight back into the client, as sketched above). And then also update the mysql.proc table:
UPDATE mysql.proc SET definer = 'root@localhost' WHERE definer = 'mariadb.sys@localhost';
And lastly, I had to run:
DELETE FROM tables_priv WHERE User = 'mariadb.sys';
FLUSH privileges;
Wait, was the tables_priv entry the whole problem all along? Not sure. But now I can run:
CREATE USER 'mariadb.sys'@'localhost' ACCOUNT LOCK PASSWORD EXPIRE;
GRANT SELECT, DELETE ON mysql.global_priv TO 'mariadb.sys'@'localhost';
And reverse the other statements:
SELECT CONCAT("ALTER DEFINER= mariadb.sys @localhost VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'root@localhost';
[Execute the output.]
UPDATE mysql.proc SET definer = 'mariadb.sys@localhost' WHERE definer = 'root@localhost';
And while we're on the topic of borked MariaDB authentication, here are the steps to change the root password and restore all root privileges if you can't get in at all or your root user is missing the GRANT OPTION (you can change ALTER to CREATE if the root user does not even exist):
systemctl stop mariadb
mariadbd-safe --skip-grant-tables --skip-networking &
mysql -u root
[mysql]> FLUSH PRIVILEGES;
[mysql]> ALTER USER 'root'@'localhost' IDENTIFIED VIA mysql_native_password USING PASSWORD('your-secret-password') OR unix_socket;
[mysql]> GRANT ALL PRIVILEGES ON *.* to 'root'@'localhost' WITH GRANT OPTION;
mariadb-admin shutdown
systemctl start mariadb
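A quick sanity check afterwards is to log back in with the new password and confirm the grants look right:
# confirm the reset took effect
mysql -u root -p -e "SHOW GRANTS FOR 'root'@'localhost';"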

14 October 2023

François Marier: Enabling AppArmor on a Linode VPS in enforcement mode

Enabling AppArmor on a Debian Linode VPS is not entirely straightforward. Here's what I had to do in order to make it work.

Packages to install
The easy bit was to install a few packages:
apt install grub2 apparmor-profiles-extra apparmor-profiles apparmor
and then adding apparmor=1 security=apparmor to the kernel command line (GRUB_CMDLINE_LINUX) in /etc/default/grub.
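For reference, the resulting line in /etc/default/grub looks something like this (merge with whatever parameters are already there), followed by regenerating the config:
# in /etc/default/grub
GRUB_CMDLINE_LINUX="apparmor=1 security=apparmor"
# then regenerate the grub configuration
update-grub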

Move away from using Linode's kernels
As mentioned in this blog post, I found out that these parameters are ignored by the Linode kernels. I had to:
  1. login to the Linode Manager (i.e. https://cloud.linode.com/linodes/<linode ID>/configurations),
  2. click the relevant node,
  3. click "Edit" next to the configuration profile, and
  4. change the kernel to "GRUB 2".

Fix grub
Next I found out that grub doesn't actually install itself properly because it can't be installed directly on the virtual drives provided by Linode (KVM). Manually running this hack worked for me:
grub-install --grub-setup=/bin/true /dev/null

Unbound + Let's Encrypt fix
Finally, my local Unbound installation stopped working because it couldn't access the Let's Encrypt certificates anymore. The solution to this was pretty straightforward. All I needed to do was to add the following to /etc/apparmor.d/local/usr.sbin.unbound:
/etc/letsencrypt/archive/** r,
/etc/letsencrypt/live/** r,
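After adding the local override, the profile needs to be reloaded for the change to take effect; something like this should do it:
# reload the unbound profile so the new local rules are picked up
apparmor_parser -r /etc/apparmor.d/usr.sbin.unbound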
